Adaptive primal-dual stochastic gradient method for expectation-constrained convex stochastic programs


Abstract

Stochastic gradient methods (SGMs) have been widely used for solving stochastic optimization problems. A majority of existing works assume no constraints or easy-to-project constraints. In this paper, we consider convex stochastic optimization problems with expectation constraints. For these problems, it is often extremely expensive to perform projection onto the feasible set. Several SGMs in the literature can be applied to solve such expectation-constrained problems. We propose a novel primal-dual type SGM based on the Lagrangian function. Different from existing methods, our method incorporates an adaptiveness technique to speed up convergence. At each iteration, it inquires an unbiased subgradient of the Lagrangian function, and then renews the primal variables by an adaptive-SGM update and the dual variables by a vanilla-SGM update. We show that the proposed method has a convergence rate of $$O(1/\sqrt{k})$$ in terms of objective error and constraint violation. Although this rate is the same as those of existing SGMs, we observe significantly faster convergence than non-adaptive methods on Neyman–Pearson classification and quadratically constrained quadratic programs. Furthermore, we modify the method for convex–concave stochastic minimax problems, in which adaptive-SGM updates are applied to both primal and dual variables. A convergence rate of $$O(1/\sqrt{k})$$ is also established for the modified method in terms of the primal-dual gap. Our code is released at https://github.com/RPI-OPT/APriD.
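The primal-dual scheme described in the abstract can be illustrated on a toy problem. The sketch below is not the authors' released APriD implementation; it is a minimal illustration, assuming a hypothetical instance $\min_x \|x-c\|^2$ s.t. $\mathbb{E}[a^\top x - b + \text{noise}] \le 0$, an Adam-style adaptive primal update, and a vanilla projected-SGM dual update, with $O(1/\sqrt{k})$ step sizes chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance:  min_x ||x - c||^2  s.t.  E[a@x - b + noise] <= 0.
# Its solution is the projection of c onto {x : a@x <= b}.
c = np.array([2.0, 2.0])
a, b = np.array([1.0, 1.0]), 1.0     # optimum x* = (0.5, 0.5), multiplier z* = 3

x, z = np.zeros(2), 0.0              # primal iterate, dual multiplier (z >= 0)
m, v = np.zeros(2), np.zeros(2)      # Adam-style first/second moment estimates
beta1, beta2, eps = 0.9, 0.99, 1e-8

for k in range(1, 5001):
    g_sample = a @ x - b + rng.normal(scale=0.1)   # unbiased constraint sample
    grad_x = 2.0 * (x - c) + z * a                 # subgradient of the Lagrangian in x

    # adaptive (Adam-like) primal update with O(1/sqrt(k)) step size
    m = beta1 * m + (1 - beta1) * grad_x
    v = beta2 * v + (1 - beta2) * grad_x**2
    x -= (0.5 / np.sqrt(k)) * m / (np.sqrt(v) + eps)

    # vanilla SGM dual ascent step, projected onto z >= 0
    z = max(0.0, z + (0.5 / np.sqrt(k)) * g_sample)

print(x, z)   # approaches x* = (0.5, 0.5), z* = 3
```

The dual update is a plain projected stochastic ascent step, while the primal step is normalized coordinate-wise by the second-moment estimate, which is the adaptiveness ingredient the abstract highlights.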



Similar articles

A Primal-Dual Decomposition Algorithm for Multistage Stochastic Convex Programming

This paper presents a new and high performance solution method for multistage stochastic convex programming. Stochastic programming is a quantitative tool developed in the field of optimization to cope with the problem of decision-making under uncertainty. Among others, stochastic programming has found many applications in finance, such as asset-liability and bond-portfolio management. However,...


Adaptive discretization of convex multistage stochastic programs

$$\min_x\; F_I(x,\xi) := \sum_{i\in I} p_i\, f(x_i,\xi_i) \quad \text{s.t.}\quad g_{t(n)}(x^1,\dots,x^n,\zeta^n) \le 0,\; x^n \in X_{t(n)},\; n \in N(I),$$
with $f(x,\xi)$ and $g_t(x,\xi)$ convex in $x$, and $X_t$ convex for $t = 1,\dots,T$. Notation: $\xi_i$, $i \in I$, are the scenarios of the stochastic process $\xi$; $\xi^n := \xi_{i,t(n)}$ and $\zeta^n := (\xi^1,\dots,\xi^n)$; $x_i$, $i \in I$, is the decision vector for scenario $i$, with $x^n := x_{i,t(n)}$; $p_i$, $i \in I$, are the scenario probabilities; $N(I)$ is the set of nodes of the scenario tree defined by the scenarios in $I$; and $t(n)$ is the time stage of node $n$...
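The scenario-weighted objective $F_I(x,\xi)$ above is just a probability-weighted sum over scenarios. A minimal numeric sketch, assuming a hypothetical two-scenario instance with $f(x,\xi) = (x-\xi)^2$ (convex in $x$, as the formulation requires):

```python
import numpy as np

p = np.array([0.3, 0.7])    # scenario probabilities p_i
xi = np.array([1.0, 2.0])   # scenario realizations xi_i
x = np.array([0.5, 0.5])    # per-scenario decisions x_i

# F_I(x) = sum_i p_i * f(x_i, xi_i) with f(x, xi) = (x - xi)^2
F = float(np.sum(p * (x - xi) ** 2))
print(F)   # 0.3*(0.5-1)^2 + 0.7*(0.5-2)^2 = 0.075 + 1.575 = 1.65
```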


Stochastic Successive Convex Approximation for Non-Convex Constrained Stochastic Optimization

This paper proposes a constrained stochastic successive convex approximation (CSSCA) algorithm to find a stationary point for a general non-convex stochastic optimization problem, whose objective and constraint functions are nonconvex and involve expectations over random states. The existing methods for non-convex stochastic optimization, such as the stochastic (average) gradient and stochastic...


Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization

We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate method, which alternates between maximizing over one (or more) randomly chosen dual variable and minimizing over the primal variab...


Stochastic Inertial primal-dual algorithms

We propose and study a novel stochastic inertial primal-dual approach to solve composite optimization problems. These problems arise naturally when learning with penalized regularization schemes. Our analysis provides convergence results in a general setting that allows us to analyze, in a unified framework, a variety of special cases of interest. Key to our analysis is considering the framew...





Journal

Journal title: Mathematical Programming Computation

Year: 2022

ISSN: 1867-2949, 1867-2957

DOI: https://doi.org/10.1007/s12532-021-00214-w